Compiler Support for Sparse Tensor Computations in MLIR

Authors

Abstract

Sparse tensors arise in problems in science, engineering, machine learning, and data analytics. Programs that operate on such tensors can exploit sparsity to reduce storage requirements and computational time. Developing and maintaining sparse software by hand, however, is a complex and error-prone task. Therefore, we propose treating sparsity as a property of tensors, not a tedious implementation task, and letting a sparse compiler generate sparse code automatically from a sparsity-agnostic definition of the computation. This article discusses integrating this idea into MLIR.
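The idea in the abstract — write the computation as if tensors were dense and let the compiler produce code that exploits sparsity — can be sketched in plain Python. This is only an illustration of the concept, not the MLIR implementation: a sparsity-agnostic matrix-vector product next to a hand-written kernel that visits only the nonzeros stored in CSR (compressed sparse row) form.

```python
# Sketch: the same y = A @ x computed two ways (illustrative, not MLIR output).

def dense_matvec(A, x):
    # Sparsity-agnostic definition: iterates over every entry, zeros included.
    n, m = len(A), len(A[0])
    return [sum(A[i][j] * x[j] for j in range(m)) for i in range(n)]

def csr_matvec(pos, crd, val, x, n):
    # Sparse kernel: visits only stored nonzeros.
    # pos[i]..pos[i+1] delimits row i's entries in crd (columns) and val (values).
    y = [0.0] * n
    for i in range(n):
        for k in range(pos[i], pos[i + 1]):
            y[i] += val[k] * x[crd[k]]
    return y

A = [[2.0, 0.0, 0.0],
     [0.0, 0.0, 3.0],
     [0.0, 4.0, 0.0]]
# CSR storage of A: three rows, one nonzero each.
pos = [0, 1, 2, 3]
crd = [0, 2, 1]
val = [2.0, 3.0, 4.0]
x = [1.0, 2.0, 3.0]

assert dense_matvec(A, x) == csr_matvec(pos, crd, val, x, 3)  # both give [2.0, 9.0, 8.0]
```

The sparse kernel performs work proportional to the number of nonzeros rather than to the full dense shape; the paper's point is that such kernels should be generated by the compiler, not written by hand.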


Similar resources

Compiler Support for Optimizing Tensor Contraction Expressions in Quantum Chemistry Computations

This paper provides an overview of compile-time optimizations developed in the context of a program synthesis system for a class of quantum chemistry computations. These computations are expressible as a set of tensor contractions and arise in electronic structure modeling. The synthesis system will take as input a high-level specification of the computation and generate high-performance parall...


HPF-2 Support for Dynamic Sparse Computations

There is a class of sparse matrix computations, such as direct solvers of systems of linear equations, that change the fill-in (nonzero entries) of the coefficient matrix, and involve row and column operations (pivoting). This paper addresses the problem of the parallelization of these sparse computations from the point of view of the parallel language and the compiler. Dynamic data structures ...


Memory Hardware Support for Sparse Computations

Address computations and indirect, hence double, memory accesses in sparse matrix application software render sparse computations to be inefficient in general. In this paper we propose memory architectures that support the storage of sparse vectors and matrices. In a first design, called vector storage, a matrix is handled as an array of sparse vectors, stored as singly-linked lists. Deletion and i...
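The "vector storage" design described above — each row kept as a singly-linked list of (index, value) nodes so entries can be deleted or inserted without shifting neighbors — can be sketched in Python. This is a software illustration of the data structure only, not the proposed hardware:

```python
# Sketch: a sparse vector as a singly-linked list of (index, value) nodes,
# kept sorted by index.

class Node:
    def __init__(self, index, value, nxt=None):
        self.index, self.value, self.next = index, value, nxt

class SparseVector:
    def __init__(self):
        self.head = None

    def insert(self, index, value):
        # Walk to the insertion point; overwrite an existing entry in place.
        prev, cur = None, self.head
        while cur and cur.index < index:
            prev, cur = cur, cur.next
        if cur and cur.index == index:
            cur.value = value
            return
        node = Node(index, value, cur)
        if prev:
            prev.next = node
        else:
            self.head = node

    def delete(self, index):
        # Unlink the node for `index`, if present; no other entries move.
        prev, cur = None, self.head
        while cur and cur.index != index:
            prev, cur = cur, cur.next
        if cur:
            if prev:
                prev.next = cur.next
            else:
                self.head = cur.next

    def to_pairs(self):
        out, cur = [], self.head
        while cur:
            out.append((cur.index, cur.value))
            cur = cur.next
        return out

v = SparseVector()
v.insert(5, 2.0); v.insert(1, 7.0); v.insert(3, 4.0)
v.delete(3)
assert v.to_pairs() == [(1, 7.0), (5, 2.0)]
```

Linked storage trades the O(1) indexed access of a dense array for cheap insertion and deletion, which is why it suits computations that change the fill-in.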


Compiler Optimizations for I/O-Intensive Computations

This paper describes transformation techniques for out-of-core programs (i.e., those that deal with very large quantities of data) based on exploiting locality using a combination of loop and data transformations. Writing efficient out-of-core programs is an arduous task. As a result, compiler optimizations directed at improving I/O performance are becoming increasingly important. We describe ho...


HPF Library, Language and Compiler Support for Shadow Edges in Data Parallel Irregular Computations

On distributed memory architectures data parallel compilers emulate the global address space by distributing the data onto the processors according to the mapping directives of the user and by generating explicit inter-processor communication automatically. A shadow is additionally allocated local memory to keep on one processor also non-local values of the data that is accessed or defined by th...



Journal

Journal title: ACM Transactions on Architecture and Code Optimization

Year: 2022

ISSN: 1544-3973, 1544-3566

DOI: https://doi.org/10.1145/3544559